2,567 research outputs found

    A Communication Channel Density Estimating Generative Adversarial Network

    Autoencoder-based communication systems use neural network channel models to backpropagate message reconstruction error gradients across an approximation of the physical communication channel. In this work, we develop and test a new generative adversarial network (GAN) architecture for training a stochastic channel-approximating neural network. Previous research has focused on additive white Gaussian noise (AWGN) channels and/or simplified Rayleigh fading channels, both of which are linear and have well-defined analytic solutions. Given that training a neural network is computationally expensive, channel approximation networks, and more generally the autoencoder systems, should be evaluated in communication environments that are traditionally difficult. To that end, our investigation focuses on channels that contain a combination of non-linear amplifier distortion, pulse shape filtering, intersymbol interference, frequency-dependent group delay, multipath, and non-Gaussian statistics. Each of our models is trained without any prior knowledge of the channel. We show that the trained models have learned to generalize over an arbitrary amplifier drive level and constellation alphabet. We demonstrate the versatility of our GAN architecture by comparing the marginal probability density function of several channel simulations with that of their corresponding neural network approximations.
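    A minimal PyTorch sketch of the general idea: a conditional GAN whose generator maps transmitted symbols plus a noise vector to simulated received symbols, so the learned channel is stochastic and differentiable. The layer sizes, names, and training loop are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps (transmitted symbols, noise) -> simulated received symbols."""
    def __init__(self, sym_dim=2, noise_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sym_dim + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, sym_dim))
    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=-1))

class Discriminator(nn.Module):
    """Scores (transmitted, received) pairs as measured vs. generated."""
    def __init__(self, sym_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * sym_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))
    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1))

def train_step(G, D, opt_g, opt_d, x, y_real, noise_dim=16):
    """One GAN update: D separates measured channel outputs y_real
    from G's samples; G learns to fool D."""
    bce = nn.BCEWithLogitsLoss()
    z = torch.randn(x.size(0), noise_dim)
    y_fake = G(x, z)
    # Discriminator step
    opt_d.zero_grad()
    loss_d = (bce(D(x, y_real), torch.ones(x.size(0), 1))
              + bce(D(x, y_fake.detach()), torch.zeros(x.size(0), 1)))
    loss_d.backward()
    opt_d.step()
    # Generator step
    opt_g.zero_grad()
    loss_g = bce(D(x, y_fake), torch.ones(x.size(0), 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

    Once trained, the generator stands in for the physical channel, letting reconstruction-error gradients flow back to an autoencoder transmitter.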

    Modulation Classification of Satellite Communication Signals Using Cumulants and Neural Networks

    National Aeronautics and Space Administration (NASA)'s future communication architecture is evaluating cognitive technologies and increased system intelligence. These technologies are expected to reduce the operational complexity of the network, increase science data return, and reduce interference to self and others. In order to increase situational awareness, signal classification algorithms could be applied to identify users and distinguish sources of interference. A significant amount of previous work has been done in the area of automatic signal classification for military and commercial applications. As a preliminary step, we seek to develop a system with the ability to discern signals typically encountered in satellite communication. Proposed is an automatic modulation classifier which utilizes higher-order statistics (cumulants) and an estimate of the signal-to-noise ratio. These features are extracted from baseband symbols and then processed by a neural network for classification. The modulation types considered are phase-shift keying (PSK), amplitude and phase-shift keying (APSK), and quadrature amplitude modulation (QAM). Physical layer properties specific to the Digital Video Broadcasting - Satellite - Second Generation (DVB-S2) standard, such as pilots and variable ring ratios, are also considered. This paper will provide simulation results of a candidate modulation classifier, and performance will be evaluated over a range of signal-to-noise ratios, frequency offsets, and nonlinear amplifier distortions.
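    For illustration, a small NumPy sketch of cumulant feature extraction from complex baseband symbols. The cumulant definitions (C40, C41, C42) are standard; the particular feature set and power normalization below are our assumptions, not necessarily the paper's.

```python
import numpy as np

def cumulant_features(x):
    """Return power-normalized higher-order cumulants of symbols x."""
    x = x - x.mean()                  # remove any DC offset
    m20 = np.mean(x**2)               # E[x^2]
    m21 = np.mean(np.abs(x)**2)       # E[|x|^2] (signal power)
    m40 = np.mean(x**4)               # E[x^4]
    m41 = np.mean(x**3 * np.conj(x))  # E[x^3 x*]
    m42 = np.mean(np.abs(x)**4)       # E[|x|^4]
    c40 = m40 - 3 * m20**2
    c41 = m41 - 3 * m20 * m21
    c42 = m42 - np.abs(m20)**2 - 2 * m21**2
    # Normalize by squared power so features are scale-invariant.
    return np.array([c40, c41, c42]) / m21**2

# Example: QPSK symbols in AWGN at roughly 10 dB SNR.
rng = np.random.default_rng(0)
bits = rng.integers(0, 4, 4096)
syms = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))
noise = 0.22 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))
print(np.abs(cumulant_features(syms + noise)))  # fed to a neural-network classifier
```

    Different constellations leave distinct cumulant signatures (e.g., QPSK has |C40| near 1, 16-QAM a smaller value), which is what makes these features useful classifier inputs.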

    Receiver-Driven Video Adaptation

    In the span of a single generation, video technology has made an incredible impact on daily life. Modern use cases for video are wildly diverse, including teleconferencing, live streaming, virtual reality, home entertainment, social networking, surveillance, body cameras, cloud gaming, and autonomous driving. As these applications continue to grow more sophisticated and heterogeneous, a single representation of video data can no longer satisfy all receivers. Instead, the initial encoding must be adapted to each receiver's unique needs. Existing adaptation strategies are fundamentally flawed, however, because they discard the video's initial representation and force the content to be re-encoded from scratch. This process is computationally expensive, does not scale well with the number of videos produced, and throws away important information embedded in the initial encoding. Therefore, a compelling need exists for the development of new strategies that can adapt video content without fully re-encoding it. To better support the unique needs of smart receivers, diverse displays, and advanced applications, general-use video systems should produce and offer receivers a more flexible compressed representation that supports top-down adaptation strategies from an original, compressed-domain ground truth. This dissertation proposes an alternate model for video adaptation that addresses these challenges. The key idea is to treat the initial compressed representation of a video as the ground truth, and allow receivers to drive adaptation by dynamically selecting which subsets of the captured data to receive. In support of this model, three strategies for top-down, receiver-driven adaptation are proposed. First, a novel, content-agnostic entropy coding technique is implemented in which symbols are selectively dropped from an input abstract symbol stream based on their estimated probability distributions to hit a target bit rate. Receivers are able to guide the symbol dropping process by supplying the encoder with an appropriate rate controller algorithm that fits their application needs and available bandwidths. Next, a domain-specific adaptation strategy is implemented for H.265/HEVC coded video in which the prediction data from the original source is reused directly in the adapted stream, but the residual data is recomputed as directed by the receiver. By tracking the changes made to the residual, the encoder can compensate for decoder drift to achieve near-optimal rate-distortion performance. Finally, a fully receiver-driven strategy is proposed in which the syntax elements of a pre-coded video are cataloged and exposed directly to clients through an HTTP API. Instead of requesting the entire stream at once, clients identify the exact syntax elements they wish to receive using a carefully designed query language. Although an implementation of this concept is not provided, an initial analysis shows that such a system could save bandwidth and computation when used by certain targeted applications.
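    As a rough illustration of the first strategy, the sketch below drops symbols from an abstract stream according to their estimated information cost, -log2(p), until a receiver-supplied bit budget is met. The function names and the greedy policy are hypothetical stand-ins, not the dissertation's implementation.

```python
import math

def adapt_stream(symbols, probs, target_bits, keep_priority):
    """symbols: abstract symbol stream; probs: estimated probability of
    each symbol; keep_priority(i): receiver-supplied importance ranking
    for symbol index i. Returns the kept subset within target_bits."""
    costs = [-math.log2(p) for p in probs]   # estimated entropy cost per symbol
    order = sorted(range(len(symbols)), key=keep_priority, reverse=True)
    kept, budget = set(), target_bits
    for i in order:
        if costs[i] <= budget:               # keep the most important symbols
            kept.add(i)                      # that still fit in the budget
            budget -= costs[i]
    return [s for i, s in enumerate(symbols) if i in kept]
```

    The point of the design is that the encoder stays content-agnostic: the receiver expresses its needs entirely through the priority function and the bit budget.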

    Futures Prices in Supply Analysis: Are Instrumental Variables Necessary?

    Citation: Nathan P. Hendricks, Joseph P. Janzen, Aaron Smith; Futures Prices in Supply Analysis: Are Instrumental Variables Necessary?, American Journal of Agricultural Economics, Volume 97, Issue 1, 1 January 2015, Pages 22–39, https://doi.org/10.1093/ajae/aau062
    Crop yield shocks are partially predictable: high planting-time futures prices have tended to indicate that yield would be below trend. As a result, regressions of total caloric production on futures prices produce estimates of the supply elasticity that are biased downwards by up to 75%. Regressions of the world's growing area on futures prices have a much smaller bias of about 20% because, although yield shocks are partially predictable, this predictability has a relatively small effect on land allocation. We argue that the preferred method for estimating the crop supply elasticity is to regress growing area on futures prices and include the realized yield shock as a control variable. An alternative method for bias reduction is to use instrumental variables (IVs). We show that the marginal contribution of an IV to bias reduction is small; IVs are not necessary for futures prices in supply analysis.
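    A minimal sketch of the preferred specification on synthetic data, using statsmodels OLS. The data-generating process, coefficients, and variable names are illustrative assumptions; only the idea (price partially anticipates the yield shock, so controlling for the realized shock removes the bias) comes from the abstract.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
yield_shock = rng.standard_normal(n)
# Futures prices partially anticipate yield shocks (the source of bias).
log_price = -0.5 * yield_shock + rng.standard_normal(n)
# True supply elasticity set to 0.1 in this toy model.
log_area = 0.1 * log_price + 0.05 * yield_shock + 0.1 * rng.standard_normal(n)

# Regress area on price WITH the realized yield shock as a control.
X = sm.add_constant(np.column_stack([log_price, yield_shock]))
fit = sm.OLS(log_area, X).fit()
print(fit.params)  # coefficient on log_price recovers ~0.1
```

    Omitting the yield_shock column from X reproduces the omitted-variable bias the paper describes; adding it back is the cheap alternative to an instrumental variable.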

    Quantitative Research Methods for Political Science, Public Policy and Public Administration for Undergraduates: 1st Edition With Applications in R

    Quantitative Research Methods for Political Science, Public Policy and Public Administration for Undergraduates: 1st Edition With Applications in R is an adaptation of Quantitative Research Methods for Political Science, Public Policy and Public Administration (With Applications in R). The focus of this book is on using quantitative research methods to test hypotheses and build theory in political science, public policy and public administration. This new version of the text omits large portions of the original text that focused on calculus and linear algebra, expands and reorganizes the content on the software system R, and includes guided study questions at the end of each chapter.

    Prospectus, February 16, 2005


    Quantitative Research Methods for Political Science, Public Policy and Public Administration for Undergraduates: 1st Edition With Applications in Excel

    Quantitative Research Methods for Political Science, Public Policy and Public Administration for Undergraduates: 1st Edition With Applications in Excel is an adaptation of Quantitative Research Methods for Political Science, Public Policy and Public Administration (With Applications in R). The focus of this book is on using quantitative research methods to test hypotheses and build theory in political science, public policy and public administration. This new version is designed specifically for undergraduate courses. It omits large portions of the original text that focused on calculus and linear algebra, reorganizes the software content by shifting from R to Excel, and includes guided study questions at the end of each chapter.

    The Type Ic Supernova 1994I in M51: Detection of Helium and Spectral Evolution

    We present a series of spectra of SN 1994I in M51, starting 1 week prior to maximum brightness. The nebular phase began about 2 months after the explosion; together with the rapid decline of the optical light, this suggests that the ejected mass was small. Although lines of He I in the optical region are weak or absent, consistent with the Type Ic classification, we detect strong He I λ10830 absorption during the first month past maximum. Thus, if SN 1994I is a typical Type Ic supernova, the atmospheres of these objects cannot be completely devoid of helium. The emission-line widths are smaller than predicted by the model of Nomoto and coworkers, in which the iron core of a low-mass carbon-oxygen star collapses. They are, however, larger than in Type Ib supernovae.

    The Impact of Assuming Flatness in the Determination of Neutrino Properties from Cosmological Data

    Cosmological data have provided new constraints on the number of neutrino species and the neutrino mass. However, these constraints depend on assumptions related to the underlying cosmology. Since a correlation is expected between the number of effective neutrinos N_{eff}, the neutrino mass \sum m_\nu, and the curvature of the universe \Omega_k, it is useful to investigate the current constraints in the framework of a non-flat universe. In this paper we update the constraints on neutrino parameters by making use of the latest cosmic microwave background (CMB) data from the ACT and SPT experiments and consider the possibility of a universe with non-zero curvature. We first place new constraints on N_{eff} and \Omega_k, with N_{eff} = 4.03 +/- 0.45 and 10^3 \Omega_k = -4.46 +/- 5.24. Thus, even when \Omega_k is allowed to vary, N_{eff} = 3 is still disfavored at 95% confidence. We then investigate the correlation between neutrino mass and curvature, which shifts the 95% upper limit of \sum m_\nu < 0.45 eV to \sum m_\nu < 0.95 eV. Thus, the impact of assuming flatness in neutrino cosmology is significant and an essential consideration for future experiments.
    Comment: 6 pages, 4 figures. Submitted to PR
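    A quick check of the quoted exclusion, using only the numbers above and assuming an approximately Gaussian posterior (the Gaussian assumption is ours, not the paper's):

```latex
\frac{N_{\mathrm{eff}} - 3}{\sigma} = \frac{4.03 - 3}{0.45} \approx 2.3 > 1.96
\quad \text{(the two-sided 95\% threshold)},
```

    so N_{eff} = 3 indeed falls outside the 95% interval even with curvature free.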

    Prospectus, April 20, 2005
